
    VP Ellipsis and Semantic Identity

    While it is generally agreed that an elliptical Verb Phrase must be identical to its antecedent, the precise formulation of the identity condition is controversial. I present a semantic identity condition on VP ellipsis: the elided VP must have the same meaning as its antecedent. I argue that a semantic identity condition is superior to a syntactic condition on both empirical and theoretical grounds. In addition, I show that the proposed condition differs significantly from previously proposed semantic conditions, in that other approaches do not take into account the dynamic nature of semantic representation.

    Computational Accounts of Music Understanding

    We examine various computational accounts of aspects of music understanding. These accounts involve programs which can notate melodies based on pitch and duration information. It is argued that this task involves significant musical intelligence. In particular, it requires an understanding of the basic metric and harmonic relations implicit in the melody. We deal only with single-voice, tonal melodies. While the task is a limited one, and the programs give only partial solutions to it, we argue that this represents a first step towards a computational realization of significant aspects of musical intelligence.

    Ellipsis and Discourse (Dissertation Proposal)

    In human discourse there is much that is communicated without being explicitly stated. The grammar of natural language provides a broad array of mechanisms for such implicit communication. One example of this is verb phrase ellipsis, in which a verb phrase is elided, its position marked only by an auxiliary verb. Such elliptical constructions are generally easily and unambiguously understood. In this proposal I will attempt to explain how this is accomplished.

    Verb Phrase Ellipsis: Form, Meaning, and Processing

    The central claim of this dissertation is that an elliptical VP is a proform. This claim has two primary consequences: first, the elliptical VP can have no internal syntactic structure; second, the interpretation of VP ellipsis must be governed by the same general conditions governing other proforms, such as pronouns. The basic condition governing the interpretation of a proform is that it must be semantically identified with its antecedent. A computational model is described in which this identification is mediated by store and retrieve operations defined with respect to a discourse model. Because VP ellipsis is treated on a par with other proforms, the ambiguity arising from “sloppy identity” becomes epiphenomenal, resulting from the fact that the store and retrieve operations are freely ordered. A primary argument for the proform theory of VP ellipsis concerns syntactic constraints on variables within the antecedent. I examine many different types of variables, including reflexives, reciprocals, negative polarity items, and wh-traces. In all these cases, syntactic constraints are not respected under ellipsis. This indicates that the relation governing VP ellipsis is semantic rather than syntactic. In further support of the proform theory, I show that there is a striking similarity between the antecedence possibilities for VP ellipsis and those for pronouns. Two computer programs demonstrate the claims of this dissertation. One program implements the semantic copying required to resolve VP ellipsis, demonstrating the correct set of possible readings for the examples of interest. The second program selects the antecedent for a VP ellipsis occurrence. This program has been tested on several hundred examples of VP ellipsis, automatically collected from corpora.
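    The abstract attributes strict/sloppy ambiguity to the free ordering of store and retrieve operations against a discourse model. A toy sketch of that idea (my own illustration, not the dissertation's program), for the classic example "John loves his cat, and Bill does too":

```python
# Toy illustration (not the dissertation's actual program) of how the order
# of store/retrieve operations yields strict vs. sloppy readings for
# "John loves his cat, and Bill does too."

def store_vp(resolve_pronoun_before_store, antecedent_subject):
    """Build the meaning of 'loves his cat' for the discourse model."""
    if resolve_pronoun_before_store:
        # pronoun resolved before the VP is stored -> strict reading
        owner = antecedent_subject
        return lambda subj: f"{subj} loves {owner}'s cat"
    # pronoun left dependent on the subject -> sloppy reading on retrieval
    return lambda subj: f"{subj} loves {subj}'s cat"

for before, label in [(True, "strict"), (False, "sloppy")]:
    vp = store_vp(before, "John")      # store the antecedent VP
    print(label, "->", vp("Bill"))     # retrieve at the ellipsis site
# strict -> Bill loves John's cat
# sloppy -> Bill loves Bill's cat
```

    Because nothing forces one ordering over the other, both readings fall out without any ellipsis-specific ambiguity mechanism, which is the sense in which the ambiguity is epiphenomenal.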

    A Uniform Syntax and Discourse Structure: the Copenhagen Dependency Treebanks

    I present arguments in favor of the Uniformity Hypothesis: the hypothesis that discourse can extend syntactic dependencies without conflicting with them. I consider arguments that Uniformity is violated in certain cases involving quotation, and I argue that the cases presented in the literature are in fact completely consistent with Uniformity. I report on an analysis of all examples in the Copenhagen Dependency Treebanks (CDT) involving violations of Uniformity. I argue that they are in fact all consistent with Uniformity, and conclude that the CDT should be revised to reflect this.

    Can You Trust Online Ratings? Evidence of Systematic Differences in User Populations

    Do user populations differ systematically in the way they express and rate sentiment? We use large collections of Danish and U.S. reviews to investigate this question, and we find evidence of important systematic differences: first, positive ratings are far more common in the U.S. data than in the Danish data. Second, Danish reviewers tend to under-rate their own positive reviews compared to U.S. reviewers. This has potentially far-reaching implications for the interpretation of user ratings, the use of which has exploded in recent years.
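    As an illustration of the kind of comparison such a study involves, a pooled two-proportion z-test on the share of positive ratings in two populations; the counts below are invented for demonstration and are not the paper's data:

```python
from math import sqrt

# Hypothetical counts of positive (e.g., 4-5 star) reviews; these numbers
# are invented for illustration and are NOT the study's actual data.
us_pos, us_total = 800, 1000
dk_pos, dk_total = 550, 1000

p_us = us_pos / us_total
p_dk = dk_pos / dk_total

# pooled two-proportion z-test for the difference in positive-rating rates
p = (us_pos + dk_pos) / (us_total + dk_total)
z = (p_us - p_dk) / sqrt(p * (1 - p) * (1 / us_total + 1 / dk_total))
print(f"US: {p_us:.2f}, DK: {p_dk:.2f}, z = {z:.1f}")
```

    A large |z| indicates the gap in positive-rating rates is very unlikely to be sampling noise, which is the statistical sense in which the populations "differ systematically."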

    Learning Mixtures of Gaussians in High Dimensions

    Efficiently learning mixtures of Gaussians is a fundamental problem in statistics and learning theory. Given samples drawn from one of k Gaussian distributions in R^n chosen at random, the learning problem asks to estimate the means and covariance matrices of these Gaussians. This problem arises in many areas, ranging from the natural sciences to the social sciences, and has also found many machine learning applications. Unfortunately, learning mixtures of Gaussians is an information-theoretically hard problem: in order to learn the parameters up to reasonable accuracy, the number of samples required is exponential in the number of Gaussian components in the worst case. In this work, we show that, provided we are in sufficiently high dimension, the class of Gaussian mixtures is learnable in its most general form under a smoothed analysis framework, where the parameters are randomly perturbed from an adversarial starting point. In particular, given samples from a mixture of Gaussians with randomly perturbed parameters, when n > Ω(k^2), we give an algorithm that learns the parameters in polynomial running time and using a polynomial number of samples. The central algorithmic ideas consist of new ways to decompose the moment tensor of the Gaussian mixture by exploiting its structural properties. The symmetries of this tensor are derived from the combinatorial structure of higher-order moments of Gaussian distributions (sometimes referred to as Isserlis' theorem or Wick's theorem). We also develop new tools for bounding the smallest singular values of structured random matrices, which could be useful in other smoothed analysis settings.
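    For context on the estimation problem itself, here is a minimal EM sketch for a 1-D two-component Gaussian mixture. This is the classical likelihood-based baseline, not the paper's moment-tensor algorithm, and the well-separated synthetic data below is chosen only to make the illustration reliable:

```python
import numpy as np

# Minimal EM baseline for a 1-D two-component Gaussian mixture.
# This is the classical likelihood-based approach, NOT the paper's
# moment-tensor algorithm; it only illustrates the estimation problem.

def em_gmm(x, iters=200):
    # deterministic initialization: extreme data points as starting means
    mu = np.array([x.min(), x.max()])
    var = np.full(2, x.var())
    w = np.full(2, 0.5)
    for _ in range(iters):
        # E-step: responsibilities r[i, j] = P(component j | x_i)
        dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = w * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted re-estimation of weights, means, and variances
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return w, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
w, mu, var = em_gmm(x)   # mu recovers means near -3 and 3
```

    EM of this kind can require many samples or get stuck as components overlap or k grows, which is one motivation for the moment-based, smoothed-analysis approach the abstract describes.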

    Re-binding and the Derivation of Parallelism Domains
